Sexual abuse


Sam Altman's sister is suing the OpenAI CEO alleging sexual abuse

Engadget

Annie Altman, the sister of OpenAI founder and CEO Sam Altman, has sued her brother, accusing him of sexually assaulting her when she was a minor. In a complaint filed this week with a Missouri federal court, Annie Altman alleges that her older brother committed "numerous acts of rape, sexual assault, sexual abuse, molestation, sodomy, and battery" from 1997 to 2006, with the abuse starting when she was only three years old. In a joint statement made alongside his mother and two younger brothers, Sam Altman said "all of [Annie's] claims are utterly untrue." The Altmans say they have tried to support Annie in "many ways" over the years, including by offering direct financial assistance. The statement begins: "My sister has filed a lawsuit against me."


OpenAI chief executive Sam Altman accused of sexual abuse by sister in lawsuit

The Guardian

The sister of the OpenAI chief executive, Sam Altman, has filed a lawsuit alleging that he regularly sexually abused her for several years, starting when they were children. The lawsuit filed on 6 January in a US district court in the Eastern District of Missouri alleges that the abuse began when Ann Altman was three and Sam Altman was 12. The filing alleges that the last instance of abuse took place when he was an adult but his sister, known as Annie, was still a child. The chief executive of the ChatGPT developer posted a joint statement on X, which he had signed along with his mother, Connie, and his younger brothers, Max and Jack, denying the allegations and calling them "utterly untrue". "Our family loves Annie and is very concerned about her wellbeing," the statement said.


From spy cams to deepfake porn: fury in South Korea as women targeted again

The Guardian

For the second time in just a few years, South Korean women took to the streets of Seoul to demand an end to sexual abuse. When the country spearheaded Asia's #MeToo movement, the culprit was molka – spy cams used to record women without their knowledge. Now their fury was directed at an epidemic of deepfake pornography. For Juhee Jin, 26, a Seoul resident who advocates for women's rights, the emergence of this new menace, in which women and girls are again the targets, was depressingly predictable. "This should have been addressed a long time ago," says Jin, a translator.


UK watchdog accuses Apple of failing to report sexual images of children

The Guardian

Apple is failing to effectively monitor its platforms or scan for images and videos of the sexual abuse of children, child safety experts allege, raising concerns about how the company will handle the growing volume of such material associated with artificial intelligence. The UK's National Society for the Prevention of Cruelty to Children (NSPCC) accuses Apple of vastly undercounting how often child sexual abuse material (CSAM) appears in its products. In a single year, child predators used Apple's iCloud, iMessage and FaceTime to store and exchange CSAM in more recorded cases in England and Wales alone than the company reported across all other countries combined, according to police data obtained by the NSPCC. Through data gathered via freedom of information requests and shared exclusively with the Guardian, the children's charity found Apple was implicated in 337 recorded offences of child abuse images between April 2022 and March 2023 in England and Wales. In 2023, Apple made just 267 reports of suspected CSAM on its platforms worldwide to the National Center for Missing & Exploited Children (NCMEC) — a stark contrast with its big tech peers, with Google reporting more than 1.47m and Meta reporting more than 30.6m, per NCMEC's annual report.


AI is overpowering efforts to catch child predators, experts warn

The Guardian

The volume of sexually explicit images of children being generated by predators using artificial intelligence is overwhelming law enforcement's capabilities to identify and rescue real-life victims, child safety experts warn. Prosecutors and child safety groups working to combat crimes against children say AI-generated images have become so lifelike that in some cases it is difficult to determine whether real children were subjected to real harms in their production. A single AI model can generate tens of thousands of new images in a short amount of time, and this content has begun to flood the dark web and seep into the mainstream internet. "We are starting to see reports of images that are of a real child but have been AI-generated, but that child was not sexually abused. But now their face is on a child that was abused," said Kristina Korobov, senior attorney at the Zero Abuse Project, a Minnesota-based child safety non-profit.


UK school pupils 'using AI to create indecent imagery of other children'

The Guardian

Children in British schools are using artificial intelligence (AI) to make indecent images of other children, a group of experts on child abuse and technology has warned. They said that a number of schools were reporting for the first time that pupils were using AI image-generation technology to create images of children that legally constituted child sexual abuse material. Emma Hardy, UK Safer Internet Centre (UKSIC) director, said the pictures were "terrifyingly" realistic. "The quality of the images that we're seeing is comparable to professional photos taken annually of children in schools up and down the country," said Hardy, who is also the Internet Watch Foundation communications director. "The photo-realistic nature of AI-generated imagery of children means sometimes the children we see are recognisable as victims of previous sexual abuse. "Children must be warned that it can spread across the internet and end up being seen by strangers and sexual predators."


AI-created child sexual abuse images 'threaten to overwhelm internet'

The Guardian

The "worst nightmares" about artificial intelligence-generated child sexual abuse images are coming true and threaten to overwhelm the internet, a safety watchdog has warned. The Internet Watch Foundation (IWF) said it had found nearly 3,000 AI-made abuse images that broke UK law. The UK-based organisation said existing images of real-life abuse victims were being built into AI models, which then produce new depictions of them. It added that the technology was also being used to create images of celebrities who have been "de-aged" and then depicted as children in sexual abuse scenarios. Other examples of child sexual abuse material (CSAM) included using AI tools to "nudify" pictures of clothed children found online.


A deep-learning approach to early identification of suggested sexual harassment from videos

Shetye, Shreya, Maiti, Anwita, Maiti, Tannistha, Singh, Tarry

arXiv.org Artificial Intelligence

Sexual harassment, sexual abuse, and sexual violence are prevalent problems in this day and age. Women's safety is an important issue that needs to be highlighted and addressed. Given this, we have studied each of these concerns and the factors that affect them, based on images taken from movies. We have classified the three terms (harassment, abuse, and violence) based on the visual attributes present in images depicting these situations. We identified that factors such as the facial expressions of the victim and perpetrator and unwanted touching had a direct link to identifying scenes containing sexual harassment, abuse and violence. We also studied and outlined how state-of-the-art explicit content detectors such as the Google Cloud Vision API and Clarifai API fail to identify and categorise these images. Based on these definitions and characteristics, we have developed a first-of-its-kind dataset from various Indian movie scenes. These scenes are classified as sexual harassment, sexual abuse, or sexual violence and exported in the PASCAL VOC 1.1 format. Our dataset is annotated on the identified relevant features and can be used to develop and train a deep-learning computer vision model to identify these issues. The dataset is publicly available for research and development.
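Since the paper exports its annotations in the standard PASCAL VOC 1.1 XML format, a minimal sketch of how such a file could be read for model training looks like the following. The tag names ("object", "name", "bndbox") are standard VOC; the class label used in the example is an assumption, not taken from the paper's actual label set.

```python
# Minimal sketch: parsing one PASCAL VOC 1.1 annotation file into
# (label, bounding-box) pairs suitable for a detection training pipeline.
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_text):
    """Return a list of (label, (xmin, ymin, xmax, ymax)) tuples."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        label = obj.findtext("name")           # class label, e.g. a scene category
        bb = obj.find("bndbox")                # axis-aligned bounding box
        coords = tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label, coords))
    return boxes
```

Because VOC is a de-facto standard, the same parser works with off-the-shelf loaders and converters (e.g. to COCO format) commonly used in detection frameworks.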


Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic

TIME - Tech

ChatGPT was hailed as one of 2022's most impressive technological innovations upon its release last November. The powerful artificial intelligence (AI) chatbot can generate text on almost any topic or theme, from a Shakespearean sonnet reimagined in the style of Megan Thee Stallion to complex mathematical theorems described in language a five-year-old can understand. Within a week, it had more than a million users. ChatGPT's creator, OpenAI, is now reportedly in talks with investors to raise funds at a $29 billion valuation, including a potential $10 billion investment by Microsoft. That would make OpenAI, which was founded in San Francisco in 2015 with the aim of building superintelligent machines, one of the world's most valuable AI companies.


Can facial analysis technology create a child-safe internet?

The Guardian

Suppose you pulled out your phone this morning to post a pic to your favourite social network – let's call it Twinstabooktok – and were asked for a selfie before you could log on. The picture you submitted wouldn't be sent anywhere, the service assured you: instead, it would use state-of-the-art machine-learning techniques to work out your age. In all likelihood, once you'd submitted the scan, you could continue on your merry way. If the service guessed wrong, you could appeal, though that might take a bit longer. The social network would be able to know that you were an adult user and provide you with an experience largely free of parental controls and paternalist moderation, while children who tried to sign up would be given a restricted version of the same experience.
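The gating flow described above – estimate age from a selfie, then grant the full experience, a restricted one, or a slower appeal path – can be sketched as simple decision logic. This is a hypothetical illustration: the function name, threshold values, and confidence cutoff are assumptions, not any real platform's API, and the age-estimation model itself is out of scope.

```python
# Hypothetical sketch of selfie-based age gating. The age estimate and its
# confidence would come from a face-analysis model; here they are inputs.
def choose_experience(estimated_age, confidence,
                      adult_threshold=18, min_confidence=0.8):
    """Map a (possibly uncertain) age estimate to an account experience."""
    if confidence < min_confidence:
        return "appeal"      # model unsure: route to slower manual review
    if estimated_age >= adult_threshold:
        return "full"        # adult: no parental controls or restrictions
    return "restricted"      # likely a child: moderated experience
```

The design choice the article hints at is precisely this three-way split: rather than forcing a binary adult/child call on a noisy estimate, low-confidence cases fall through to a human appeal process.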